This coursework focuses on housing prices, with the main objective being to predict the price of a property based on various inputs. The inputs include features such as the area, the number and types of rooms, and additional factors like the availability of a main road, hot water heating, and more.
The dependent variable is the price, since it is the primary concern for most people searching for a house. The goal of this work is to predict the price from diverse inputs of mixed data types: a continuous measurement (area), numeric counts (bedrooms, bathrooms, stories, parking), and yes/no or categorical indicators (mainroad, furnishingstatus, and so on).
This is a regression problem, because the objective is to predict a numeric value: in this case, the price of the property.
Now we are going to import our dataset into this project.
dt_houses <- fread(file = "./datasets/Regression_set.csv")
I would like to check whether there are any missing (NA) values in my
dataset. A good approach is to go through all rows and columns and
check for NAs with the built-in R function
complete.cases(data_table). This function returns TRUE for rows
that contain no NA values, so negating it selects the incomplete rows.
nas <- dt_houses[!complete.cases(dt_houses)]
nas
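As a quick illustration (the tiny data frame below is made up just for this note), complete.cases() returns TRUE for complete rows, so negating it keeps only the rows containing an NA:

```r
# toy data frame (made up): the second row is missing its price
df_demo <- data.frame(price = c(13300000, NA), area = c(7420, 8960))

complete.cases(df_demo)              # TRUE FALSE
df_demo[!complete.cases(df_demo), ]  # keeps only the row containing an NA
```

On our dataset the same filter comes back empty, meaning every row is complete.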
That looks great: no incomplete rows, so we can explore our dataset :)
Before we explore the data, I want to load all the libraries we will probably use:
library(data.table)
library(ggcorrplot)
library(ggExtra)
library(ggplot2)
library(ggridges)
library(ggsci)
library(ggthemes)
library(RColorBrewer)
library(svglite)
library(viridis)
library(scales)
library(rpart)
library(rpart.plot)
I found some helpful functions in R for a first look at the data. We will start with the structure, then get some summary statistics, and finally take a head() of the data.
str(dt_houses)
Classes ‘data.table’ and 'data.frame': 545 obs. of 13 variables:
$ price : int 13300000 12250000 12250000 12215000 11410000 10850000 10150000 10150000 9870000 9800000 ...
$ area : int 7420 8960 9960 7500 7420 7500 8580 16200 8100 5750 ...
$ bedrooms : int 4 4 3 4 4 3 4 5 4 3 ...
$ bathrooms : int 2 4 2 2 1 3 3 3 1 2 ...
$ stories : int 3 4 2 2 2 1 4 2 2 4 ...
$ mainroad : chr "yes" "yes" "yes" "yes" ...
$ guestroom : chr "no" "no" "no" "no" ...
$ basement : chr "no" "no" "yes" "yes" ...
$ hotwaterheating : chr "no" "no" "no" "no" ...
$ airconditioning : chr "yes" "yes" "no" "yes" ...
$ parking : int 2 3 2 3 2 2 2 0 2 1 ...
$ prefarea : chr "yes" "no" "yes" "yes" ...
$ furnishingstatus: chr "furnished" "furnished" "semi-furnished" "furnished" ...
- attr(*, ".internal.selfref")=<externalptr>
Summary statistics:
summary(dt_houses[, .(price, area, bedrooms, bathrooms, stories, parking)])
price area bedrooms bathrooms stories parking
Min. : 1750000 Min. : 1650 Min. :1.000 Min. :1.000 Min. :1.000 Min. :0.0000
1st Qu.: 3430000 1st Qu.: 3600 1st Qu.:2.000 1st Qu.:1.000 1st Qu.:1.000 1st Qu.:0.0000
Median : 4340000 Median : 4600 Median :3.000 Median :1.000 Median :2.000 Median :0.0000
Mean : 4766729 Mean : 5151 Mean :2.965 Mean :1.286 Mean :1.806 Mean :0.6936
3rd Qu.: 5740000 3rd Qu.: 6360 3rd Qu.:3.000 3rd Qu.:2.000 3rd Qu.:2.000 3rd Qu.:1.0000
Max. :13300000 Max. :16200 Max. :6.000 Max. :4.000 Max. :4.000 Max. :3.0000
And this is a sample of our dataset:
head(dt_houses)
I would like to start with the densities of the main variables which, from my domain knowledge, are important for property prices.
Price density:
ggplot(data = dt_houses, aes(x = price)) +
geom_density(fill="#f1b147", color="#f1b147", alpha=0.25) +
labs(
x = 'Price',
y = 'Density'
) +
geom_vline(xintercept = mean(dt_houses$price), linetype="dashed") +
scale_x_continuous(labels = label_number(scale = 1e-6, suffix = "M")) +
theme_minimal() +
theme(axis.line = element_line(color = "#000000"))
It is very clear that most of the prices lie below roughly 5-6 million, with a long tail of more expensive properties.
Area density:
ggplot(data = dt_houses, aes(x = area)) +
geom_density(fill="#f1b147", color="#f1b147", alpha=0.25) +
labs(
x = 'Area',
y = 'Density'
) +
theme_minimal() +
theme(axis.line = element_line(color = "#000000"))
The area density looks a little more centered, but it is still skewed to the right (a long tail of large areas).
How does area affect the price of a house? We will plot points with price on the y-axis and area on the x-axis.
ggplot() +
geom_point(data = dt_houses, aes(x = area, y = price, color = parking)) +
scale_y_continuous(labels = label_number(scale = 1e-6, suffix = "M")) +
theme_minimal() +
theme(axis.line = element_line(color = "#000000"))
This looks nice, and it is also logical: more space, higher price. But if we look at the parking spaces, it is hard to see a trend.
Now for the simplest idea: how does the number of bedrooms correlate with the price?
ggplot(data = dt_houses, aes(x = factor(bedrooms), y = price)) +
geom_boxplot() +
theme_minimal()
We can see that, on average, more bedrooms means a higher price, but I think the relationship between these two variables is not very strong.
It would also be good to take a look at a bedrooms histogram:
ggplot(data = dt_houses, aes(x = bedrooms)) +
geom_histogram(fill="#2f9e44", color="#2f9e44", alpha=0.25) +
geom_vline(xintercept = mean(dt_houses$bedrooms), linetype="dashed") +
theme_minimal() +
theme(axis.line = element_line(color = "#000000"))
Mean of the bedrooms:
mean(dt_houses$bedrooms)
[1] 2.965138
Here we can see that most of the properties tend to have 2, 3 or 4 bedrooms.
Let's have a look at a histogram of stories:
ggplot(data = dt_houses, aes(x = stories)) +
geom_histogram(fill="#2f9e44", color="#2f9e44", alpha=0.25) +
geom_vline(xintercept = mean(dt_houses$stories), linetype="dashed") +
theme_minimal() +
theme(axis.line = element_line(color = "#000000"))
mean(dt_houses$stories)
[1] 1.805505
We can see that most of the houses have 1-2 stories.
Bathrooms are also an interesting variable, so let's look at a histogram and a boxplot of bathrooms against price:
ggplot(data = dt_houses, aes(x = bathrooms)) +
geom_histogram(fill="#2f9e44", color="#2f9e44", alpha=0.25) +
geom_vline(xintercept = mean(dt_houses$bathrooms), linetype="dashed") +
theme_minimal() +
theme(axis.line = element_line(color = "#000000"))
ggplot(data = dt_houses, aes(x = factor(bathrooms), y = price)) +
geom_boxplot() +
theme_minimal()
Here it is also almost obvious that more bathrooms push the price up. The only drawback is that the dataset lacks data on properties with 3 or 4 bathrooms: there are a few with 3, but very few with 4.
Furnishing is also important; many people search for furnished apartments, but the furniture may not be in the best shape, or the buyer may not like the style. So in my opinion it is not as strong a predictor as, for example, area.
How much of the real estate is furnished or not:
ggplot(data = dt_houses, aes(x = factor(furnishingstatus), fill = factor(furnishingstatus))) +
geom_bar(color="#ced4da", alpha=0.25) +
scale_fill_viridis_d(option = "D") +
labs(title = "Furnishing Status Counts",
x = "Furnishing Status",
y = "Count") +
theme_minimal() +
theme(axis.line = element_line(color = "#000000"))
We can see that most of the houses are semi-furnished, which is also logical: when selling a house or apartment, the owners usually take the most valuable things with them, furniture included.
Now it would be good to look at the price and area distributions across differently furnished properties.
ggplot(data = dt_houses, aes(y = price, x = area, color = bedrooms)) +
geom_point() +
geom_hline(yintercept = mean(dt_houses$price), linetype='dashed') +
facet_grid(.~furnishingstatus) +
scale_y_continuous(labels = label_number(scale = 1e-6, suffix = "M")) +
scale_color_distiller(type = "seq", palette = "Greens") +
theme_minimal() +
theme(axis.line = element_line(color = "#000000"))
Also notice that, on average, unfurnished houses are less expensive.
We can also take a look at some pie charts:
dt_mainroad_counts <- as.data.frame(table(dt_houses$mainroad)) #table() - creates frequency table
colnames(dt_mainroad_counts) <- c("mainroad_status", "count")
dt_mainroad_counts$percentage <- round(dt_mainroad_counts$count / sum(dt_mainroad_counts$count) * 100, 1)
ggplot(data = dt_mainroad_counts, aes(x = "", y = count, fill = mainroad_status)) +
geom_bar(stat = "identity", width = 1, color = "white") +
coord_polar("y", start = 0) +
geom_text(aes(label = paste0(percentage, "%")),
position = position_stack(vjust = 0.5), color = "white", size = 4) +
theme_void() +
scale_fill_manual(values = c("#F1B147", "#47B1F1")) +
labs(
title = "Distribution of Mainroad Status",
fill = "Mainroad Status"
)
Almost 86 percent of the houses are on a main road, so this may not turn out to be a strong predictor variable.
dt_airconditioning_counts <- as.data.frame(table(dt_houses$airconditioning)) #table() - creates frequency table
colnames(dt_airconditioning_counts) <- c("airconditioning_status", "count")
dt_airconditioning_counts$percentage <- round(dt_airconditioning_counts$count / sum(dt_airconditioning_counts$count) * 100, 1)
ggplot(data = dt_airconditioning_counts, aes(x = "", y = count, fill = airconditioning_status)) +
geom_bar(stat = "identity", width = 1, color = "white") +
coord_polar("y", start = 0) +
geom_text(aes(label = paste0(percentage, "%")),
position = position_stack(vjust = 0.5), color = "white", size = 4) +
theme_void() +
scale_fill_manual(values = c("#F1B147", "#47B1F1")) +
labs(
title = "Distribution of Airconditioning status",
fill = "Airconditioning Status"
)
Here 68.4 percent of the houses have no air conditioning, and I do not know yet how this will affect predictions.
I think that is enough exploration, and we can start with our first model.
First, I would like to start pretty simple, with a linear model.
I will include all variables in the model, because they all seem potentially important.
But before we start, I want to introduce a data table that will be very useful at the end of this coursework.
dt_features_performance <- data.table("price_lm_rmse" = c(0, 0, 0, 0, 0), "price_tree_rmse" = c(0, 0, 0, 0, 0), "feature" = c(0, 1, 2, 3, 4))
I will use the lm() function in R to estimate the beta coefficients and build my model:
price_lm <- lm(formula = price ~ area + bedrooms + hotwaterheating + airconditioning + stories + mainroad + parking + furnishingstatus + bathrooms + guestroom + basement + prefarea, data = dt_houses)
summary(price_lm)
Call:
lm(formula = price ~ area + bedrooms + hotwaterheating + airconditioning +
stories + mainroad + parking + furnishingstatus + bathrooms +
guestroom + basement + prefarea, data = dt_houses)
Residuals:
Min 1Q Median 3Q Max
-2619718 -657322 -68409 507176 5166695
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 42771.69 264313.31 0.162 0.871508
area 244.14 24.29 10.052 < 2e-16 ***
bedrooms 114787.56 72598.66 1.581 0.114445
hotwaterheatingyes 855447.15 223152.69 3.833 0.000141 ***
airconditioningyes 864958.31 108354.51 7.983 8.91e-15 ***
stories 450848.00 64168.93 7.026 6.55e-12 ***
mainroadyes 421272.59 142224.13 2.962 0.003193 **
parking 277107.10 58525.89 4.735 2.82e-06 ***
furnishingstatussemi-furnished -46344.62 116574.09 -0.398 0.691118
furnishingstatusunfurnished -411234.39 126210.56 -3.258 0.001192 **
bathrooms 987668.11 103361.98 9.555 < 2e-16 ***
guestroomyes 300525.86 131710.22 2.282 0.022901 *
basementyes 350106.90 110284.06 3.175 0.001587 **
prefareayes 651543.80 115682.34 5.632 2.89e-08 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 1068000 on 531 degrees of freedom
Multiple R-squared: 0.6818, Adjusted R-squared: 0.674
F-statistic: 87.52 on 13 and 531 DF, p-value: < 2.2e-16
We got an R-squared of 0.68, which is not bad for a model we just put together. But that's not all; I will try to do better here. First, though, another model.
I would like to measure the performance of my models with RMSE, so I will calculate it for the linear model.
price_lm_rmse <- mean(sqrt(abs(price_lm$residuals))) # note: mean(sqrt(|residual|)), not the conventional sqrt(mean(residual^2))
price_lm_rmse
[1] 797.382
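One caveat (my own observation, not part of the assignment): the value above is mean(sqrt(|residual|)), which differs from the conventional root-mean-square error sqrt(mean(residual^2)). The same formula is applied consistently to every model below, so the relative comparisons still hold, but the two metrics live on different scales. A minimal sketch with a made-up residual vector:

```r
# made-up residuals, purely to illustrate the two formulas
res <- c(-200, 100, 300, -100)

sqrt(mean(res^2))     # conventional RMSE: sqrt(37500), about 193.65
mean(sqrt(abs(res)))  # metric used in this report, about 12.87
```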
I think this model could perform better, because some variables may affect the price non-linearly; in that case a tree model can show better performance.
prices_tree <- rpart(data = dt_houses, formula = price ~ area + bedrooms + hotwaterheating + airconditioning + stories + mainroad + parking + furnishingstatus + bathrooms + guestroom + basement + prefarea, method = 'anova')
prp(prices_tree, digits = -3)
printcp(prices_tree)
Regression tree:
rpart(formula = price ~ area + bedrooms + hotwaterheating + airconditioning +
stories + mainroad + parking + furnishingstatus + bathrooms +
guestroom + basement + prefarea, data = dt_houses, method = "anova")
Variables actually used in tree construction:
[1] airconditioning area basement bathrooms furnishingstatus parking
Root node error: 1.9032e+15/545 = 3.4921e+12
n= 545
CP nsplit rel error xerror xstd
1 0.304946 0 1.00000 1.00376 0.085227
2 0.094553 1 0.69505 0.73092 0.064222
3 0.053743 2 0.60050 0.62882 0.055156
4 0.026381 3 0.54676 0.61111 0.053610
5 0.024922 4 0.52038 0.61289 0.054002
6 0.022993 5 0.49546 0.60986 0.055433
7 0.021374 6 0.47246 0.60792 0.056123
8 0.015261 7 0.45109 0.55950 0.049829
9 0.013952 8 0.43583 0.55840 0.050791
10 0.012386 9 0.42188 0.55543 0.050946
11 0.010000 10 0.40949 0.53421 0.048304
Now I have built a tree model on my dataset with the help of rpart; let's explore it:
prices_tree
n= 545
node), split, n, deviance, yval
* denotes terminal node
1) root 545 1.903208e+15 4766729
2) area< 5954 361 6.066751e+14 4029993
4) bathrooms< 1.5 293 3.297298e+14 3773561
8) area< 4016 174 1.437122e+14 3431227
16) furnishingstatus=unfurnished 78 4.036605e+13 2977962 *
17) furnishingstatus=furnished,semi-furnished 96 7.430067e+13 3799505 *
9) area>=4016 119 1.358098e+14 4274118 *
5) bathrooms>=1.5 68 1.746610e+14 5134912
10) airconditioning=no 44 7.024826e+13 4563682 *
11) airconditioning=yes 24 6.373358e+13 6182167 *
3) area>=5954 184 7.161564e+14 6212174
6) bathrooms< 1.5 108 2.869179e+14 5382579
12) airconditioning=no 65 1.170629e+14 4843569
24) basement=no 38 5.226335e+13 4304816 *
25) basement=yes 27 3.824662e+13 5601815 *
13) airconditioning=yes 43 1.224240e+14 6197360 *
7) bathrooms>=1.5 76 2.492851e+14 7391072
14) parking< 1.5 51 7.184700e+13 6859794 *
15) parking>=1.5 25 1.336772e+14 8474878
30) airconditioning=no 10 5.146311e+13 7285600 *
31) airconditioning=yes 15 5.864106e+13 9267729 *
We can see that the tree has 11 terminal leaves (the node labels go up to 31), which may be okay for this kind of dataset.
Now it would be great to prune the tree, because I do not want it to overfit:
plotcp(prices_tree)
This is the complexity plot for the tree. We pick the complexity parameter with the lowest cross-validated error, so we keep only as many leaves as necessary and the tree won't overfit the data.
prices_tree_min_cp <- prices_tree$cptable[which.min(prices_tree$cptable[, "xerror"]), "CP"]
model_tree <- prune(prices_tree, cp = prices_tree_min_cp )
prp(model_tree, digits = -3)
After we have pruned the tree, let's calculate the RMSE for the tree model:
prices_tree_pred <- predict(prices_tree, dt_houses[, c("area","bathrooms", "bedrooms", "hotwaterheating", "airconditioning", "parking", "stories", "mainroad", "furnishingstatus", "guestroom", "basement", "prefarea")])
prices_tree_rmse <- mean(sqrt(abs(dt_houses$price - prices_tree_pred)))
prices_tree_rmse
[1] 860.0223
The price linear model has an RMSE of:
price_lm_rmse
[1] 797.382
The price tree model has an RMSE of:
prices_tree_rmse
[1] 860.0223
It is surprising to me, as a person without much modelling experience, that the linear model performs better than the tree model, by approx. 7.28%.
100 - price_lm_rmse / prices_tree_rmse * 100
[1] 7.283574
Collecting data for the statistics at the end:
dt_features_performance$price_lm_rmse[dt_features_performance$feature == 0] <- price_lm_rmse
dt_features_performance$price_tree_rmse[dt_features_performance$feature == 0] <- prices_tree_rmse
Here I would like to try out all the ideas and observations I have had throughout this coursework.
I have two columns, "bedrooms" and "bathrooms", which store the counts of those kinds of rooms. It makes sense to me to create a new column "room_count", because it may have a bigger impact on performance.
dt_houses[, 'room_count' := bathrooms + bedrooms]
Let's try the model with the new variable:
price_lm <- lm(formula = price ~ area + bedrooms + hotwaterheating + airconditioning + stories + mainroad + parking + furnishingstatus + bathrooms + guestroom + basement + prefarea + room_count, data = dt_houses)
summary(price_lm)
Call:
lm(formula = price ~ area + bedrooms + hotwaterheating + airconditioning +
stories + mainroad + parking + furnishingstatus + bathrooms +
guestroom + basement + prefarea + room_count, data = dt_houses)
Residuals:
Min 1Q Median 3Q Max
-2619718 -657322 -68409 507176 5166695
Coefficients: (1 not defined because of singularities)
Estimate Std. Error t value Pr(>|t|)
(Intercept) 42771.69 264313.31 0.162 0.871508
area 244.14 24.29 10.052 < 2e-16 ***
bedrooms 114787.56 72598.66 1.581 0.114445
hotwaterheatingyes 855447.15 223152.69 3.833 0.000141 ***
airconditioningyes 864958.31 108354.51 7.983 8.91e-15 ***
stories 450848.00 64168.93 7.026 6.55e-12 ***
mainroadyes 421272.59 142224.13 2.962 0.003193 **
parking 277107.10 58525.89 4.735 2.82e-06 ***
furnishingstatussemi-furnished -46344.62 116574.09 -0.398 0.691118
furnishingstatusunfurnished -411234.39 126210.56 -3.258 0.001192 **
bathrooms 987668.11 103361.98 9.555 < 2e-16 ***
guestroomyes 300525.86 131710.22 2.282 0.022901 *
basementyes 350106.90 110284.06 3.175 0.001587 **
prefareayes 651543.80 115682.34 5.632 2.89e-08 ***
room_count NA NA NA NA
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 1068000 on 531 degrees of freedom
Multiple R-squared: 0.6818, Adjusted R-squared: 0.674
F-statistic: 87.52 on 13 and 531 DF, p-value: < 2.2e-16
price_lm_rmse <- mean(sqrt(abs(price_lm$residuals)))
price_lm_rmse
[1] 797.382
This is exactly the same model. We can see that room_count has NA for its coefficient: it is the exact sum of bedrooms and bathrooms (perfect collinearity), so lm drops it, and the variable does not make the model any better.
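The NA can be reproduced on a tiny synthetic example (names and numbers made up): when one predictor is an exact linear combination of others, as room_count = bedrooms + bathrooms is here, lm flags the redundant coefficient as NA because of the singularity:

```r
x1 <- c(1, 2, 3, 4, 5, 6)
x2 <- c(2, 1, 4, 3, 6, 5)
x3 <- x1 + x2                # perfectly collinear, like room_count
y  <- c(5, 4, 11, 10, 17, 16)

coef(lm(y ~ x1 + x2 + x3))   # the x3 coefficient comes back NA (aliased)
```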
prices_tree <- rpart(data = dt_houses, formula = price ~ area + bedrooms + hotwaterheating + airconditioning + stories + mainroad + parking + furnishingstatus + bathrooms + guestroom + basement + prefarea + room_count, method = 'anova')
prp(prices_tree, digits = -3)
I think that in the feature engineering part I won't plot the tree complexity or explore the tree itself, because the main focus here is on benchmarking and comparing the two models with new features. Let's prune the model and measure the RMSE:
prices_tree_min_cp <- prices_tree$cptable[which.min(prices_tree$cptable[, "xerror"]), "CP"]
model_tree <- prune(prices_tree, cp = prices_tree_min_cp )
prp(model_tree, digits = -3)
Pruning is done; now the moment of truth: will the tree model with the new room_count feature perform better?
prices_tree_pred <- predict(prices_tree, dt_houses[, c("area","bathrooms", "bedrooms", "hotwaterheating", "airconditioning", "parking", "stories", "mainroad", "furnishingstatus", "guestroom", "basement", "prefarea", "room_count")])
prices_tree_rmse <- mean(sqrt(abs(dt_houses$price - prices_tree_pred)))
prices_tree_rmse
[1] 850.5616
And it does perform better indeed: approx. 1.1% better performance.
100 - prices_tree_rmse / 860.0223 * 100
[1] 1.100051
dt_features_performance$price_lm_rmse[dt_features_performance$feature == 1] <- price_lm_rmse
dt_features_performance$price_tree_rmse[dt_features_performance$feature == 1] <- prices_tree_rmse
With the new 'room_count' feature, the linear model performs the same (RMSE 797.382), while the tree model improved its benchmark by 1.1%.
The linear model is still better, but maybe there is a chance; we have 3 more features to try.
What if we bring the area variable closer to Gaussian with a log transformation? The area density is skewed to the right, and a log transformation can help normalize it.
dt_houses[, area_log := log(area)]
A little visualization:
ggplot(data = dt_houses, aes(x = area_log)) +
geom_density(fill="#f1b147", color="#f1b147", alpha=0.25) +
labs(
x = 'log(Area)',
y = 'Density'
) +
theme_minimal() +
theme(axis.line = element_line(color = "#000000"))
And try the model again :)
price_lm <- lm(formula = price ~ area + bedrooms + hotwaterheating + airconditioning + stories + mainroad + parking + furnishingstatus + bathrooms + guestroom + basement + prefarea + room_count + area_log, data = dt_houses)
summary(price_lm)
Call:
lm(formula = price ~ area + bedrooms + hotwaterheating + airconditioning +
stories + mainroad + parking + furnishingstatus + bathrooms +
guestroom + basement + prefarea + room_count + area_log,
data = dt_houses)
Residuals:
Min 1Q Median 3Q Max
-2607115 -665756 -73006 497325 5120891
Coefficients: (1 not defined because of singularities)
Estimate Std. Error t value Pr(>|t|)
(Intercept) -8.716e+06 3.455e+06 -2.523 0.011936 *
area 4.404e+01 8.233e+01 0.535 0.592912
bedrooms 1.175e+05 7.224e+04 1.627 0.104283
hotwaterheatingyes 8.585e+05 2.220e+05 3.867 0.000124 ***
airconditioningyes 8.214e+05 1.092e+05 7.525 2.28e-13 ***
stories 4.475e+05 6.386e+04 7.007 7.41e-12 ***
mainroadyes 3.471e+05 1.445e+05 2.403 0.016608 *
parking 2.689e+05 5.832e+04 4.612 5.01e-06 ***
furnishingstatussemi-furnished -7.058e+04 1.164e+05 -0.607 0.544418
furnishingstatusunfurnished -4.288e+05 1.258e+05 -3.410 0.000699 ***
bathrooms 9.814e+05 1.029e+05 9.540 < 2e-16 ***
guestroomyes 2.419e+05 1.331e+05 1.818 0.069629 .
basementyes 3.678e+05 1.099e+05 3.345 0.000880 ***
prefareayes 6.727e+05 1.154e+05 5.830 9.66e-09 ***
room_count NA NA NA NA
area_log 1.169e+06 4.596e+05 2.542 0.011290 *
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 1062000 on 530 degrees of freedom
Multiple R-squared: 0.6856, Adjusted R-squared: 0.6773
F-statistic: 82.57 on 14 and 530 DF, p-value: < 2.2e-16
price_lm_rmse <- mean(sqrt(abs(price_lm$residuals)))
price_lm_rmse
[1] 793.388
Yes! It makes smaller errors. Previously we had an RMSE of 797.382; now it is 793.388, about a 0.5% performance improvement.
100 - price_lm_rmse / 797.382 * 100
[1] 0.5008837
prices_tree <- rpart(data = dt_houses, formula = price ~ area + bedrooms + hotwaterheating + airconditioning + stories + mainroad + parking + furnishingstatus + bathrooms + guestroom + basement + prefarea + room_count + area_log, method = 'anova')
prp(prices_tree, digits = -3)
Now pruning again:
prices_tree_min_cp <- prices_tree$cptable[which.min(prices_tree$cptable[, "xerror"]), "CP"]
model_tree <- prune(prices_tree, cp = prices_tree_min_cp )
prp(model_tree, digits = -3)
And calculating the error:
prices_tree_pred <- predict(prices_tree, dt_houses[, c("area","bathrooms", "bedrooms", "hotwaterheating", "airconditioning", "parking", "stories", "mainroad", "furnishingstatus", "guestroom", "basement", "prefarea", "room_count", "area_log")])
prices_tree_rmse <- mean(sqrt(abs(dt_houses$price - prices_tree_pred)))
prices_tree_rmse
[1] 850.5616
Yep, there is no gain in performance, and I can probably say why. The linear model gains from normalizing variables because it is sensitive to the scale and shape of its inputs, but a tree model does not "care" much about the density of a variable: it splits on thresholds rather than computing distances between points, and since log is a monotone transform, every split on area has an exact equivalent on log(area). This is my own reasoning and I could be wrong; I did not google it, because it is more interesting to think through the concepts before getting an answer.
So the RMSE from the linear model is 793.388 and from the tree 850.561. The linear model is still better :)
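The monotone-transform intuition above can be sanity-checked quickly: log preserves the ordering of values, so any threshold split "area < t" is exactly equivalent to "log(area) < log(t)". A minimal check using the min, quartiles and max of area from the summary() output earlier (the 5000 threshold is just an arbitrary example):

```r
area <- c(1650, 3600, 4600, 6360, 16200)  # min, Q1, median, Q3, max of area

# the ordering, and hence any threshold split, is preserved by a monotone transform
identical(order(area), order(log(area)))       # TRUE
identical(area < 5000, log(area) < log(5000))  # TRUE: same houses on each side
```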
dt_features_performance$price_lm_rmse[dt_features_performance$feature == 2] <- price_lm_rmse
dt_features_performance$price_tree_rmse[dt_features_performance$feature == 2] <- prices_tree_rmse
I think it could be a good idea to take a look at the correlations between variables, though from the data exploration I can already say that area correlates with price.
Here we are, the correlation plot:
ggcorrplot(corr = cor(dt_houses[, .(price, area, bedrooms, bathrooms, stories, parking)]),
hc.order = TRUE,
lab = TRUE)
Hm, the correlation plot does not look as strong as I expected, but the variables most correlated with price are area and the number of bathrooms.
I got an idea: bathrooms range from 1 to 4. What if we treat each bathroom count as its own indicator variable? It is possible that a home with 2 bathrooms is drastically more expensive than one with 1, and that one with 3 bathrooms is super costly.
# creating indicator variables for each bathroom count
dt_houses[, count_bathrooms_1 := 0][bathrooms == 1, count_bathrooms_1 := 1]
dt_houses[, count_bathrooms_2 := 0][bathrooms == 2, count_bathrooms_2 := 1]
dt_houses[, count_bathrooms_3 := 0][bathrooms == 3, count_bathrooms_3 := 1]
dt_houses[, count_bathrooms_4 := 0][bathrooms == 4, count_bathrooms_4 := 1]
price_lm <- lm(formula = price ~ area + bedrooms + hotwaterheating + airconditioning + stories + mainroad + parking + furnishingstatus + bathrooms + guestroom + basement + prefarea + room_count + area_log + count_bathrooms_1 + count_bathrooms_2 + count_bathrooms_3 + count_bathrooms_4, data = dt_houses)
summary(price_lm)
Call:
lm(formula = price ~ area + bedrooms + hotwaterheating + airconditioning +
stories + mainroad + parking + furnishingstatus + bathrooms +
guestroom + basement + prefarea + room_count + area_log +
count_bathrooms_1 + count_bathrooms_2 + count_bathrooms_3 +
count_bathrooms_4, data = dt_houses)
Residuals:
Min 1Q Median 3Q Max
-2621190 -644381 -71750 495480 5189707
Coefficients: (3 not defined because of singularities)
Estimate Std. Error t value Pr(>|t|)
(Intercept) -1.389e+07 4.911e+06 -2.827 0.004873 **
area 2.858e+01 8.292e+01 0.345 0.730510
bedrooms 1.222e+05 7.219e+04 1.693 0.091082 .
hotwaterheatingyes 8.739e+05 2.218e+05 3.941 9.21e-05 ***
airconditioningyes 8.293e+05 1.094e+05 7.583 1.53e-13 ***
stories 4.479e+05 6.402e+04 6.995 8.05e-12 ***
mainroadyes 3.481e+05 1.442e+05 2.414 0.016117 *
parking 2.607e+05 5.838e+04 4.465 9.78e-06 ***
furnishingstatussemi-furnished -7.002e+04 1.167e+05 -0.600 0.548761
furnishingstatusunfurnished -4.336e+05 1.261e+05 -3.439 0.000629 ***
bathrooms 2.573e+06 1.128e+06 2.281 0.022944 *
guestroomyes 2.461e+05 1.329e+05 1.851 0.064694 .
basementyes 3.747e+05 1.098e+05 3.413 0.000692 ***
prefareayes 6.856e+05 1.154e+05 5.942 5.13e-09 ***
room_count NA NA NA NA
area_log 1.247e+06 4.623e+05 2.697 0.007215 **
count_bathrooms_1 2.992e+06 2.379e+06 1.258 0.209018
count_bathrooms_2 1.309e+06 1.277e+06 1.025 0.306020
count_bathrooms_3 NA NA NA NA
count_bathrooms_4 NA NA NA NA
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 1061000 on 528 degrees of freedom
Multiple R-squared: 0.688, Adjusted R-squared: 0.6785
F-statistic: 72.75 on 16 and 528 DF, p-value: < 2.2e-16
price_lm_rmse <- mean(sqrt(abs(price_lm$residuals)))
price_lm_rmse
[1] 788.9308
And we gain a little more performance, approx. 1.05% better than the first model without engineered features. This is really great. Not every dummy gets a coefficient, though: together with the intercept and the original bathrooms column, the four indicators are perfectly collinear, so lm drops count_bathrooms_3 and count_bathrooms_4. The remaining estimates are also shaky, because the dataset contains very few properties with 3 or 4 bathrooms.
100 - 788.9308 / 797.382 * 100
[1] 1.059868
Let us try the tree model now.
prices_tree <- rpart(data = dt_houses, formula = price ~ area + bedrooms + hotwaterheating + airconditioning + stories + mainroad + parking + furnishingstatus + bathrooms + guestroom + basement + prefarea + room_count + area_log + count_bathrooms_1 + count_bathrooms_2 + count_bathrooms_3 + count_bathrooms_4, method = 'anova')
prp(prices_tree, digits = -3)
Cleaning it up:
prices_tree_min_cp <- prices_tree$cptable[which.min(prices_tree$cptable[, "xerror"]), "CP"]
model_tree <- prune(prices_tree, cp = prices_tree_min_cp )
prp(model_tree, digits = -3)
Calculating the error:
prices_tree_pred <- predict(prices_tree, dt_houses[, c("area","bathrooms", "bedrooms", "hotwaterheating", "airconditioning", "parking", "stories", "mainroad", "furnishingstatus", "guestroom", "basement", "prefarea", "room_count", "area_log", "count_bathrooms_1", "count_bathrooms_2", "count_bathrooms_3", "count_bathrooms_4")])
prices_tree_rmse <- mean(sqrt(abs(dt_houses$price - prices_tree_pred)))
prices_tree_rmse
[1] 832.6665
This is awesome: compared with the baseline tree, the error dropped by about 27.36, which is almost 3.2% fewer errors.
100 - 832.6666 / 860.0223 * 100
[1] 3.180813
This is getting interesting. While the linear model improved by 1.05%, the tree model made a bigger gain of ~3.2%, roughly three times the linear model's improvement.
dt_features_performance$price_lm_rmse[dt_features_performance$feature == 3] <- price_lm_rmse
dt_features_performance$price_tree_rmse[dt_features_performance$feature == 3] <- prices_tree_rmse
This could be the case because, if this dataset was gathered from a hot area where summers are very warm, air conditioning could be an important factor when buying a house: having it makes a place more attractive, while lacking it could make the place worse.
For example, if there is air conditioning the effect could be beta = x, but if there is not, the effect is not 0; it subtracts some value y from the property.
Let's try this:
# creating yes/no indicator variables for air conditioning
dt_houses[, airconditioning_yes := 0][airconditioning == 'yes', airconditioning_yes := 1]
dt_houses[, airconditioning_no := 0][airconditioning == 'no', airconditioning_no := 1]
# calculating and running model
price_lm <- lm(formula = price ~ area + bedrooms + hotwaterheating + airconditioning + stories + mainroad + parking + furnishingstatus + bathrooms + guestroom + basement + prefarea + room_count + area_log + count_bathrooms_1 + count_bathrooms_2 + count_bathrooms_3 + count_bathrooms_4 + airconditioning_yes + airconditioning_no, data = dt_houses)
summary(price_lm)
Call:
lm(formula = price ~ area + bedrooms + hotwaterheating + airconditioning +
stories + mainroad + parking + furnishingstatus + bathrooms +
guestroom + basement + prefarea + room_count + area_log +
count_bathrooms_1 + count_bathrooms_2 + count_bathrooms_3 +
count_bathrooms_4 + airconditioning_yes + airconditioning_no,
data = dt_houses)
Residuals:
Min 1Q Median 3Q Max
-2621190 -644381 -71750 495480 5189707
Coefficients: (5 not defined because of singularities)
Estimate Std. Error t value Pr(>|t|)
(Intercept) -1.389e+07 4.911e+06 -2.827 0.004873 **
area 2.858e+01 8.292e+01 0.345 0.730510
bedrooms 1.222e+05 7.219e+04 1.693 0.091082 .
hotwaterheatingyes 8.739e+05 2.218e+05 3.941 9.21e-05 ***
airconditioningyes 8.293e+05 1.094e+05 7.583 1.53e-13 ***
stories 4.479e+05 6.402e+04 6.995 8.05e-12 ***
mainroadyes 3.481e+05 1.442e+05 2.414 0.016117 *
parking 2.607e+05 5.838e+04 4.465 9.78e-06 ***
furnishingstatussemi-furnished -7.002e+04 1.167e+05 -0.600 0.548761
furnishingstatusunfurnished -4.336e+05 1.261e+05 -3.439 0.000629 ***
bathrooms 2.573e+06 1.128e+06 2.281 0.022944 *
guestroomyes 2.461e+05 1.329e+05 1.851 0.064694 .
basementyes 3.747e+05 1.098e+05 3.413 0.000692 ***
prefareayes 6.856e+05 1.154e+05 5.942 5.13e-09 ***
room_count NA NA NA NA
area_log 1.247e+06 4.623e+05 2.697 0.007215 **
count_bathrooms_1 2.992e+06 2.379e+06 1.258 0.209018
count_bathrooms_2 1.309e+06 1.277e+06 1.025 0.306020
count_bathrooms_3 NA NA NA NA
count_bathrooms_4 NA NA NA NA
airconditioning_yes NA NA NA NA
airconditioning_no NA NA NA NA
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 1061000 on 528 degrees of freedom
Multiple R-squared: 0.688, Adjusted R-squared: 0.6785
F-statistic: 72.75 on 16 and 528 DF, p-value: < 2.2e-16
price_lm_rmse <- mean(sqrt(abs(price_lm$residuals)))
price_lm_rmse
[1] 788.9308
There is no performance upgrade from this feature, and the summary explains why: airconditioning_yes and airconditioning_no duplicate the existing airconditioning factor (and always sum to 1), so lm marks them as NA and the fit is unchanged.
Now, I think it won't make any difference for the tree either, because this feature was more linear-model oriented than tree oriented, but we will still try it out and compare the results:
prices_tree <- rpart(data = dt_houses, formula = price ~ area + bedrooms + hotwaterheating + airconditioning + stories + mainroad + parking + furnishingstatus + bathrooms + guestroom + basement + prefarea + room_count + area_log + count_bathrooms_1 + count_bathrooms_2 + count_bathrooms_3 + count_bathrooms_4 + airconditioning_yes + airconditioning_no, method = 'anova')
prp(prices_tree, digits = -3)
Pruning again:
prices_tree_min_cp <- prices_tree$cptable[which.min(prices_tree$cptable[, "xerror"]), "CP"]
model_tree <- prune(prices_tree, cp = prices_tree_min_cp )
prp(model_tree, digits = -3)
And calculating the error:
prices_tree_pred <- predict(prices_tree, dt_houses[, c("area","bathrooms", "bedrooms", "hotwaterheating", "airconditioning", "parking", "stories", "mainroad", "furnishingstatus", "guestroom", "basement", "prefarea", "room_count", "area_log", "count_bathrooms_1", "count_bathrooms_2", "count_bathrooms_3", "count_bathrooms_4", "airconditioning_yes", "airconditioning_no")])
prices_tree_rmse <- mean(sqrt(abs(dt_houses$price - prices_tree_pred)))
prices_tree_rmse
[1] 832.6665
And as expected, this feature did not affect performance.
For both models there was no performance gain. The linear model performs better than the tree model on this dataset, but I would like to make one small plot to finish this coursework.
dt_features_performance$price_lm_rmse[dt_features_performance$feature == 4] <- price_lm_rmse
dt_features_performance$price_tree_rmse[dt_features_performance$feature == 4] <- prices_tree_rmse
Now that I have my data, this is my conclusion plot:
ggplot(data = dt_features_performance, aes(x = feature)) +
geom_point(aes(y = price_lm_rmse), size = 4, color = "#1f77b4", alpha = 0.8) +
geom_line(aes(y = price_lm_rmse), color = "#1f77b4", linewidth = 1) +
geom_point(aes(y = price_tree_rmse), size = 4, color = "#ff7f0e", alpha = 0.8) +
geom_line(aes(y = price_tree_rmse), color = "#ff7f0e", linewidth = 1) +
labs(title = "Performance with Amount of Features",
x = "Amount of Features",
y = "Performance (RMSE)") +
theme_minimal() +
theme(
axis.line = element_line(color = "#000000"),
text = element_text(size = 14),
plot.title = element_text(size = 16, face = "bold", hjust = 0.5)
)
So now we can observe that with more features both models can perform better overall, but for this dataset and my implementation the linear model wins, although I expected the tree model to do much better. In conclusion, the tree model is lower in performance, but it gained a bigger boost from the new features than the linear model did.